tree canopy


Planning for Cooler Cities: A Multimodal AI Framework for Predicting and Mitigating Urban Heat Stress through Urban Landscape Transformation

Yi, Shengao, Li, Xiaojiang, Tu, Wei, Zhao, Tianhong

arXiv.org Artificial Intelligence

As extreme heat events intensify due to climate change and urbanization, cities face increasing challenges in mitigating outdoor heat stress. While traditional physical models such as SOLWEIG and ENVI-met provide detailed assessments of human-perceived heat exposure, their computational demands limit scalability for city-wide planning. In this study, we propose GSM-UTCI, a multimodal deep learning framework designed to predict daytime average Universal Thermal Climate Index (UTCI) at 1-meter hyperlocal resolution. The model fuses surface morphology (nDSM), high-resolution land cover data, and hourly meteorological conditions using a feature-wise linear modulation (FiLM) architecture that dynamically conditions spatial features on atmospheric context. Trained on SOLWEIG-derived UTCI maps, GSM-UTCI achieves near-physical accuracy, with an R² of 0.9151 and a mean absolute error (MAE) of 0.41°C, while reducing inference time from hours to under five minutes for an entire city. To demonstrate its planning relevance, we apply GSM-UTCI to simulate systematic landscape transformation scenarios in Philadelphia, replacing bare earth, grass, and impervious surfaces with tree canopy. Results show spatially heterogeneous but consistently strong cooling effects, with impervious-to-tree conversion producing the highest aggregated benefit (-4.18°C average change in UTCI across 270.7 km²). Tract-level bivariate analysis further reveals strong alignment between thermal reduction potential and land cover proportions. These findings underscore the utility of GSM-UTCI as a scalable, fine-grained decision support tool for urban climate adaptation, enabling scenario-based evaluation of greening strategies across diverse urban environments.
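The FiLM conditioning described in the abstract can be sketched as follows. All shapes, variable names, and the single linear map producing the scale (gamma) and shift (beta) are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def film_modulate(feature_map, met_vector, W_gamma, W_beta):
    """Feature-wise linear modulation (FiLM): a conditioning vector
    (here, hourly meteorological inputs) produces a per-channel scale
    (gamma) and shift (beta) applied to spatial feature maps."""
    gamma = met_vector @ W_gamma          # shape (C,)
    beta = met_vector @ W_beta            # shape (C,)
    # broadcast over the spatial dimensions of (H, W, C)
    return gamma * feature_map + beta

# toy shapes: 8x8 spatial grid, 4 channels, 3 meteorological inputs
C, M = 4, 3
features = rng.normal(size=(8, 8, C))     # e.g. fused nDSM + land-cover features
met = rng.normal(size=(M,))               # e.g. air temperature, humidity, wind
W_gamma = rng.normal(size=(M, C))
W_beta = rng.normal(size=(M, C))

out = film_modulate(features, met, W_gamma, W_beta)
print(out.shape)  # (8, 8, 4)
```

The point of FiLM here is that the same spatial backbone output can be re-modulated cheaply for each hour's weather, which is what makes city-wide inference in minutes plausible.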


Explainable few-shot learning workflow for detecting invasive and exotic tree species

Gevaert, Caroline M., Pedro, Alexandra Aguiar, Ku, Ou, Cheng, Hao, Chandramouli, Pranav, Javan, Farzaneh Dadrass, Nattino, Francesco, Georgievska, Sonja

arXiv.org Artificial Intelligence

Deep Learning methods are notorious for relying on extensive labeled datasets for training and evaluation. This causes difficulties in practical situations where models must be trained for new applications with very little available data. While few-shot learning algorithms can address the first problem, they still lack sufficient explanations for their results. This research tackles both challenges with an explainable few-shot learning workflow for detecting invasive and exotic tree species in the Atlantic Forest of Brazil using Unmanned Aerial Vehicle (UAV) images. By integrating a Siamese network with explainable AI (XAI), the workflow enables the classification of tree species with minimal labeled data while providing visual, case-based explanations for the predictions. Results demonstrate the effectiveness of the proposed workflow in identifying new tree species, even in data-scarce conditions. With a lightweight backbone, e.g., MobileNet, it achieves an F1-score of 0.86 in 3-shot learning, outperforming a shallow CNN. A set of explanation metrics, i.e., correctness, continuity, and contrastivity, accompanied by visual cases, provides further insights about the prediction results. This approach opens new avenues for using AI and UAVs in forest management and biodiversity conservation, particularly concerning rare or under-studied species.
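A minimal sketch of a Siamese-style 3-shot classification step: embed the few labeled support images per species, average them into a prototype, and assign a query to the nearest prototype. The linear embedding, the species labels, and the nearest-prototype rule are stand-in assumptions (the paper's backbone is a CNN such as MobileNet, and its exact matching rule may differ):

```python
import numpy as np

rng = np.random.default_rng(42)

def embed(x, W):
    """Stand-in embedding network: a single linear map here, where the
    actual workflow would use a lightweight CNN backbone."""
    return x @ W

def classify_3shot(query, support_sets, W):
    """Nearest-prototype classification: embed the support images for
    each species, average them into a prototype, and assign the query
    to the closest prototype by Euclidean distance."""
    q = embed(query, W)
    prototypes = {label: embed(shots, W).mean(axis=0)
                  for label, shots in support_sets.items()}
    return min(prototypes, key=lambda lbl: np.linalg.norm(q - prototypes[lbl]))

# toy data: 16-dim "image features", two hypothetical species, 3 shots each
W = rng.normal(size=(16, 8))
support = {"species_a": rng.normal(loc=1.0, size=(3, 16)),
           "species_b": rng.normal(loc=-1.0, size=(3, 16))}
query = rng.normal(loc=1.0, size=(16,))
print(classify_3shot(query, support, W))
```

The case-based XAI part of the workflow falls out naturally from this setup: the nearest support images themselves serve as the visual explanation for a prediction.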


Seeing the roads through the trees: A benchmark for modeling spatial dependencies with aerial imagery

Robinson, Caleb, Corley, Isaac, Ortiz, Anthony, Dodhia, Rahul, Ferres, Juan M. Lavista, Najafirad, Peyman

arXiv.org Artificial Intelligence

Fully understanding a complex high-resolution satellite or aerial imagery scene often requires spatial reasoning over a broad relevant context. The human object recognition system is able to understand objects in a scene over a long-range relevant context. For example, if a human observes an aerial scene that shows sections of road broken up by tree canopy, they are unlikely to conclude that the road has actually been broken into disjoint pieces by trees; instead, they infer that the canopy of nearby trees is occluding the road. However, there is limited research on the long-range context understanding of modern machine learning models. In this work we propose a road segmentation benchmark dataset, Chesapeake Roads Spatial Context (RSC), for evaluating the spatial long-range context understanding of geospatial machine learning models, and show how commonly used semantic segmentation models can fail at this task. For example, we show that a U-Net trained to segment roads from background in aerial imagery achieves 84% recall on unoccluded roads, but just 63.5% recall on roads covered by tree canopy, despite being trained to model both the same way. We further analyze how the performance of models changes as the relevant context for a decision (unoccluded roads in our case) varies in distance. We release the code to reproduce our experiments and the dataset of imagery and masks to encourage future research in this direction -- https://github.com/isaaccorley/ChesapeakeRSC.
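The occlusion-stratified recall reported above (84% on unoccluded roads vs. 63.5% under canopy) can be computed with a split like the following sketch; the mask names and toy arrays are assumptions for illustration, not the benchmark's actual data format:

```python
import numpy as np

def recall_by_category(pred, gt_road, gt_occluded):
    """Recall computed separately for unoccluded road pixels and road
    pixels under tree canopy, mirroring an occlusion-stratified
    evaluation of a binary road segmentation model."""
    def recall(mask):
        n = mask.sum()
        return (pred & mask).sum() / n if n else float("nan")
    return recall(gt_road & ~gt_occluded), recall(gt_road & gt_occluded)

# toy rasters: True = road predicted / road present / canopy present
pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 1]], dtype=bool)
gt_road = np.array([[1, 1, 1, 0],
                    [1, 1, 0, 1]], dtype=bool)
gt_occluded = np.array([[0, 0, 1, 0],
                        [0, 1, 0, 1]], dtype=bool)  # canopy over the road

open_rec, occ_rec = recall_by_category(pred, gt_road, gt_occluded)
print(open_rec, occ_rec)  # 1.0 on open roads, 1/3 under canopy
```

A gap between the two numbers, as in the toy example, is exactly the failure mode the benchmark is designed to expose: the model only recovers occluded roads if it reasons over context beyond the occluder.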


Google unveils AI-powered planning tools to help beat climate change's extreme heat

Engadget

With extreme weather events regularly flooding our coastal cities and burning out our rural communities, Google in its magnanimity has developed a new set of online tools that civil servants and community organizers alike can use in their efforts to stave off climate change-induced catastrophe. Google already pushes extreme weather alerts to users in affected locations, providing helpful, easy-to-understand information about the event through the Search page -- whether it's a winter storm warning, flood advisories, tornado warnings, or what have you. The company has now added extreme heat alerts to that list. Googling details on the event will return everything from the predicted start and end dates of the heatwave to medical issues to be aware of during it and how to mitigate their impacts. The company is partnering with the Global Heat Health Information Network (GHHIN) to ensure that the information provided is both accurate and applicable.


Lost cities of the Amazon are discovered after being hidden under the tree canopies for centuries

Daily Mail - Science & tech

A network of 'lost' ancient cities has been discovered in the Amazon, using lidar technology -- dubbed 'lasers in the sky' -- to peer through the tropical forest canopy. The cities, built by the Casarabe communities between 500-1400 AD, are located in the Llanos de Mojos savannah-forest, Bolivia, and have been hidden under the thick tree canopies for centuries. They feature an array of elaborate and intricate structures unlike any previously discovered in the region, including 16ft-high terraces covering 54 acres -- the equivalent of 30 football pitches -- and 69ft-tall conical pyramids. The international team of researchers from the UK and Germany also found a vast network of reservoirs, causeways and checkpoints, spanning several miles. The discovery challenges the view of Amazonia as a historically 'pristine' landscape, the researchers say, showing it was instead home to an early 'urbanism' created and managed by indigenous populations for thousands of years.


An autonomous drone for search and rescue in forests using optical sectioning algorithm

#artificialintelligence

A team of researchers working at Johannes Kepler University has developed an autonomous drone with a new type of technology to improve search-and-rescue efforts. In their paper published in the journal Science Robotics, the group describes their drone modifications. Andreas Birk with Jacobs University Bremen has published a Focus piece in the same journal issue outlining the work by the team in Austria. Finding people lost (or hiding) in the forest is difficult because of the tree cover. People in planes and helicopters have difficulty seeing through the canopy to the ground below, where people might be walking or even lying down.
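A minimal sketch of the idea behind such an optical sectioning approach: many aerial images taken along a flight path are registered to a common focal plane (the forest floor) and then integrated. Ground detail reinforces across frames, while occluding canopy, which shifts between viewpoints, averages away into a blur. The registration step is assumed already done, and all names and values here are illustrative:

```python
import numpy as np

def integrate_aerial_images(images):
    """Integrate a stack of images registered to the ground plane,
    simulating a camera with a very large synthetic aperture: objects
    on the focal plane stay sharp, off-plane occluders blur out."""
    stack = np.stack(images).astype(float)
    return stack.mean(axis=0)

# toy stack: a constant "ground" signal plus an occlusion that moves
# from frame to frame, as canopy does when the drone changes viewpoint
ground = np.full((4, 4), 30.0)            # e.g. a warm body in thermal imagery
frames = []
for i in range(8):
    frame = ground.copy()
    frame[i % 4, :] = 0.0                 # canopy blocks a different row each frame
    frames.append(frame)

result = integrate_aerial_images(frames)
print(result)
```

In the integrated image every ground pixel retains most of its signal, even though each single frame has part of the ground fully blocked; that is what lets the drone's detector see "through" the canopy.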


Artificial Intelligence helps mapping urban trees (all of them)

#artificialintelligence

Former New York Times cartographer Tim Wallace describes how his current firm, Santa Fe-based Descartes Labs, has built a machine learning model to identify tree canopy from satellite imagery, thus making accurate mapping of trees and urban forests far more accessible to cities worldwide. "The ability to map tree canopy at such a high resolution in areas that can't be easily reached on foot would be helpful for utility companies to pinpoint encroachment issues -- or for municipalities to find possible trouble spots beyond their official tree census (if they even have one)," writes Wallace. For example, unexpected tree deserts can be identified and neighborhoods that would most benefit from a surge of saplings revealed.


Mapping All of the Trees with Machine Learning – descarteslabs-team – Medium

#artificialintelligence

All this fuss is not without good reason. They make oxygen for breathing, suck up CO₂, provide shade, reduce noise pollution, and just look at them -- they're beautiful! The thing is, though, that trees are pretty hard to map. The 124,795 trees in the San Francisco Urban Forest Map shown below, for example, were cataloged over a year of survey work by a team of certified arborists. The database they created is thorough, with information on tree species and size as well as environmental factors like the presence of power lines or broken pavement.